

posted by jelizondo on Thursday April 02, @12:42PM   Printer-friendly

Euro-Office is a free, open-source alternative to Microsoft 365 and Google Docs:

  • Euro-Office is an open-source fork of OnlyOffice. Nextcloud, Ionos, and other EU-based partners have launched the project.
  • The web-based editor integrates with other platforms (Nextcloud, wikis, PM tools). It's now available in preview on GitHub.
  • It supports DOCX/PPTX/XLSX and OpenDocument formats; it aims for EU digital sovereignty amid trust concerns.

At a recent press event in Berlin, Germany, Nextcloud, Ionos, and a "coalition of other European enterprises and community organizations" announced Euro-Office, an open-source fork of OnlyOffice that aims to offer an alternative to more restrictive office platforms like Microsoft Office 365 and Google Docs.

Euro-Office's first stable release is set for this summer, and a preview build is already available on GitHub. The team behind the project says that it aims to offer "a solution for editing documents, spreadsheets and presentations, developed as a true sovereign community collaboration of over a dozen different organizations."

The suite of apps can open and edit standard Microsoft Office files, including DOCX, PPTX, and XLSX, as well as OpenDocument files such as ODS, ODT, ODP, and more, which are commonly used by LibreOffice and OpenOffice.

It's worth noting that Euro-Office isn't a stand-alone app. Instead, it's web-based and intended to be integrated with other platforms that handle documents, such as a file-sharing platform, an online wiki, or a project management tool. This means that Nextcloud, or another collaborator on the project, doesn't need to create its own document editor for Euro-Office to work.

The team behind Euro-Office says that it forked its project from OnlyOffice because it "typically does not review or accept pull requests" and "build instructions are unreliable, outdated, or just plain broken." It also mentions that the team behind OnlyOffice is based in Russia and says that the current "political situation in the country makes collaboration hard and trust difficult to earn," referencing the ongoing war in Ukraine.

According to How-To Geek, Euro-Office is just one of several similar projects from European tech companies that are building open-source alternatives to Google Docs and Microsoft 365, such as Collabora Online and LaSuite Docs.

You can find a full list of the companies involved in Euro-Office here. If you're interested in trying Euro-Office, an early version of the project is available on GitHub.

See at GitHub


Original Submission

posted by jelizondo on Thursday April 02, @08:13AM   Printer-friendly

A record-breaking microscopic QR code could make data storage last for centuries—no electricity required.

Summary:
Scientists have created a microscopic QR code so tiny it can only be seen with an electron microscope—smaller than most bacteria and now officially a world record. But this isn't just about size; it's about durability. By engraving data into ultra-stable ceramic materials, the team has opened the door to storing information that could last for centuries or even millennia without needing power or maintenance.

How small can a QR code get? A team of researchers has pushed the limits to an extreme, creating one so tiny it can only be detected using an electron microscope. Scientists at TU Wien, working with data storage company Cerabyte, produced a QR code measuring just 1.98 square micrometers, which is smaller than most bacteria. This achievement has now been officially confirmed and recorded in the Guinness Book of Records.

[Source]: Vienna University of Technology TU Wien

[Covered By]: Science Daily


Original Submission

posted by jelizondo on Thursday April 02, @03:09AM   Printer-friendly
from the at-least-it's-not-a-CAPTCHA dept.

Google gives Android users a way to install unverified apps if they prove they really, really want to

Described as an attempt to balance openness with safety:

It turns out you won't be limited to Google-verified apps and developers on Android after all. In the face of sustained community dissatisfaction with its developer verification requirement, Google has given Android users an out.

On Thursday, Google said it will offer Android users a way to continue installing software from unverified developers.

"We've heard from power users that they want to take educated risks to install software from unverified developers," wrote Matthew Forsythe, director of product management for Android App Safety, in a blog post.

Power users, for lack of a better term, have been vocal in their opposition to Google's plan, which was announced last August. Starting in September 2026, the Chocolate Factory will require apps on certified Android devices to be linked to a verified developer account.

Although Google insisted it was important for security, many voices cried out against the verification process, which involves a $25 fee and providing Google with identity documentation. In February, 37 civil society groups, non-profit organizations, and tech companies published an open letter objecting to the requirement.

So, according to the blog post, Android users will still be able to install apps from unverified developers through a one-time process that has been designed to counter scenarios where the user is pressured to install malware.

"Because the consequences of these scams that use sophisticated social engineering tactics are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion."

[...] The process is designed to create friction. Users must first enable developer mode in system settings. They then need to confirm that they're not being coerced. After that, they need to restart their phone and reauthenticate. And then they need to wait one day.

"There is a one-time, one-day wait and then you can confirm that this is really you who's making this change with our biometric authentication (fingerprint or face unlock) or device PIN," said Forsythe. "Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think."

Thereafter, you can install apps from unverified developers on the device you notionally own. Users will have the option to enable such apps for seven days or indefinitely.

Android developer verification: Balancing openness and choice with safety

Android proves you don't have to choose between an open ecosystem and a secure one:

Android is built on choice. That is why we've developed the advanced flow – an approach that allows power users to maintain the ability to sideload apps from unverified developers.

This flow is a one-time process for power users – but it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high pressure tactics to install malicious software. In these scenarios, scammers exploit fear – using threats of financial ruin, legal trouble, or harm to a loved one – to create a sense of extreme urgency. They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help. According to a 2025 report from the Global Anti-Scam Alliance (GASA), 57% of surveyed adults experienced a scam in the past year, resulting in a global consumer loss of $442 billion. Because the consequences of these scams that use sophisticated social engineering tactics are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion.

How the advanced flow works for users

  1. Enable developer mode in system settings: Activating this is simple. This prevents accidental triggers or "one-tap" bypasses often used in high-pressure scams.
  2. Confirm you aren't being coached: There is a quick check to make sure that no one is talking you into turning off your security. While power users know how to vet apps, scammers often pressure victims into disabling protections.
  3. Restart your phone and reauthenticate: This cuts off any remote access or active phone calls a scammer might be using to watch what you're doing.
  4. Come back after the protective waiting period and verify: There is a one-time, one-day wait and then you can confirm that this is really you who's making this change with our biometric authentication (fingerprint or face unlock) or device PIN. Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think.
  5. Install apps: Once you confirm you understand the risks, you're all set to install apps from unverified developers, with the option of enabling for 7 days or indefinitely. For safety, you'll still see a warning that the app is from an unverified developer, but you can just tap "Install Anyway."

We know a "one size fits all" approach doesn't work for our diverse ecosystem. We want to ensure that identity verification isn't a barrier to entry, so we're providing different paths to fit your specific needs.

In addition to the advanced flow we're building free, limited distribution accounts for students and hobbyists. This allows you to share apps with a small group (up to 20 devices) without needing to provide a government-issued ID or pay a registration fee. This ensures Android remains an open platform for learning and experimentation while maintaining robust protections for the broader community.


Original Submission

posted by janrinok on Wednesday April 01, @10:34PM   Printer-friendly

https://linuxiac.com/tails-7-6-introduces-automatic-tor-bridges-to-bypass-censorship/

Tails 7.6 adds built-in Tor bridge support for restricted networks and switches to GNOME Secrets as the default password manager.

By Bobby Borisov On March 26, 2026

Tails 7.6, a new version of the privacy-focused Linux distro that routes all internet traffic through the Tor network, is now available, with a key addition: automatic Tor bridge support. In other words, users can now obtain working Tor bridges directly from the Tor Connection assistant.

When connecting automatically, Tails detects if access to the Tor network is restricted and offers to request bridges based on the user's region. These bridges act as entry points that conceal Tor usage, allowing connections from networks where Tor is blocked.

The implementation relies on the Moat API from the Tor Project and uses domain fronting to disguise the request.

Another notable change is the replacement of KeePassXC with GNOME Secrets as the default password manager. Secrets uses the same database format, so existing KeePassXC password files can be unlocked automatically.

The new application integrates with the GNOME desktop and restores compatibility with accessibility features such as the on-screen keyboard and cursor scaling. Users who need advanced functionality can still install KeePassXC manually.

The release also includes several application updates. Tor Browser has been updated to version 15.0.8, Thunderbird to 140.8, and Electrum to 4.7. Updated firmware packages improve support for newer hardware, including graphics and wireless devices.

Several issues have been addressed in this version. These include fixes for untranslated confirmation dialogs when saving language and keyboard layouts, a broken "Learn More" button in the Thunderbird migration notification, and problems affecting automated upgrades in Turkish.

For full technical details, refer to the changelog or the release announcement

Automatic upgrades are supported starting with Tails 7.0, allowing users to update to 7.6 while keeping their Persistent Storage intact. If automatic upgrades fail or the system does not start correctly afterward, a manual upgrade path remains available.

- Related:

-- Tails 7.6 Privacy-Focused Linux Distro Released with Automatic Tor Bridges


Original Submission

posted by janrinok on Wednesday April 01, @05:53PM   Printer-friendly

https://phys.org/news/2026-03-graphene-oxide-bacteria-human-cells.html

Hygiene in everyday items that touch the body—such as clothing, masks, and toothbrushes—is critically important. The underlying principle of how graphene selectively eliminates only bacteria has now been revealed. In Advanced Functional Materials, a KAIST research team presents the potential for a next-generation antibacterial material that is safe for the human body and capable of replacing antibiotics.

A joint research team led by Professor Sang Ouk Kim from the Department of Materials Science and Engineering and Professor Hyun Jung Chung from the Department of Biological Sciences has identified the mechanism by which graphene oxide (GO) exhibits powerful antibacterial effects against bacteria while remaining harmless to human cells.

Graphene oxide is a nanomaterial consisting of an atomic level carbon layer (graphene) with oxygen attached; it is characterized by its ability to mix well with water and implement various functions.

This study is highly significant as it provides molecular-level proof of graphene's antibacterial action, which had not been clearly understood until now.

The research team confirmed that graphene oxide performs "selective antibacterial action" by attaching to and destroying only the membranes of bacteria, much like a magnet attaches only to specific metals, while leaving human cells untouched. This occurs because the oxygen functional groups on the surface of graphene oxide selectively bind with a specific component (POPG) found only in bacterial cell membranes.

Simply put, it recognizes a "target" present only in bacterial membranes to attach and destroy the structure. In this context, phospholipids are fatty components that make up the membrane surrounding a cell, and POPG is a component primarily present in bacteria.

Furthermore, fibers using this material maintained their antibacterial functions even after multiple washes, showing potential for use in various industrial fields such as apparel and medical textiles.

This technology is already being applied to consumer products. The graphene antibacterial toothbrush, released through the original patents of the faculty-led startup "Materials Creation Co., Ltd.," has sold over 10 million units, proving its commercial viability.

Additionally, GrapheneTex—textile material incorporating this technology—was used in the uniforms of the Taekwondo demonstration team at the 2024 Paris Olympics and is expected to play an active role in functional sportswear at upcoming international sporting events like the 2026 Asian Games.

Professor Sang Ouk Kim explained, "This study is an example of scientifically uncovering why graphene can selectively kill bacteria while remaining safe for the human body." He emphasized, "By utilizing this principle, we can expand beyond safe clothing without harsh chemicals to an infinite range of applications, including wearable devices and medical textile systems."

Journal information: Advanced Functional Materials

Sujin Cha et al, Biocompatible but Antibacterial Mechanism of Graphene Oxide for Sustainable Antibiotics, Advanced Functional Materials (2026). DOI: 10.1002/adfm.74695

Provided by The Korea Advanced Institute of Science and Technology (KAIST)



Original Submission

posted by mrpg on Wednesday April 01, @05:32PM   Printer-friendly
from the this-is-a-test dept.

Artemis II mission is about to fly humans to the Moon — here's the science they'll do

If all goes to plan, as soon as tomorrow [Ed. note: today Wednesday], NASA will launch four people on a journey around the Moon. The mission, known as Artemis II, would be the first time humans have left Earth's protective environment and travelled into deep space since the US Apollo programme, which ended more than half a century ago. And it could carry its astronauts farther from Earth than any humans have ever travelled.

Artemis II is one in a series of missions that ultimately aim to build humanity's first permanent base on the Moon. This mission is supposed to test the rocket, crew capsule and other space-flight hardware that NASA wants to use to land humans on the lunar surface in the coming years. During their nearly ten-day journey to the Moon and back, astronauts plan to run experiments that will set the stage for future explorers.

"What we're trying to do is not pick up where Apollo left off, but to use our decades of experience and knowledge and planning to do this sustainable presence on the Moon — and then to do science alongside of that," says Barbara Cohen, a planetary scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

[...] Some of the key experiments that will be conducted during the Artemis II mission will explore how deep-space travel affects human health. Other research will rely on the astronauts' ability to see geological features on parts of the Moon that have never been viewed by human eyes.


Original Submission

posted by janrinok on Wednesday April 01, @01:12PM   Printer-friendly

See: The US Bans All New Foreign-Made Network Routers (https://soylentnews.org/article.pl?sid=26/03/26/0219214)

https://go.theregister.com/feed/www.theregister.com/2026/03/30/professor_criticizes_fcc_router_ban/

The United States’ ban on foreign-made SOHO routers won’t improve security, and only makes sense as “industrial policy disguised as cybersecurity,” according to Milton Mueller, a professor at the Georgia Institute of Technology’s School of Public Policy and founder of its Internet Governance Project.

Mueller notes that the Federal Communications Commission (FCC) justified its ban with two arguments, one of which refers to CISA and FBI analysis that found attackers targeted SOHO routers to build a botnet that hid the Volt Typhoon and Salt Typhoon intrusions. The other argument relied on a Department of Commerce study that Mueller summarized as finding “the concentration of 85 percent of the consumer router supply chain in China creates a ‘systemic vulnerability’ where a single firmware update could be weaponized to disable U.S. home internet access.”

The academic thinks neither argument holds water.

“The digital economy is global,” he pointed out in a Saturday post. “A router ‘Made in the USA’ likely runs a Linux kernel maintained by global contributors, uses Wi-Fi drivers written in Taiwan, and incorporates open-source libraries managed by developers worldwide.”

“By focusing on the geographic location of the assembly line, the FCC ignores the logical supply chain of the software. A U.S.-assembled router with a poorly written UPnP (Universal Plug and Play) implementation is just as vulnerable to a hijacking as a foreign one.”

He also points out that the FCC worries about backdoors in routers, when research into the Typhoon gangs found they exploited unpatched bugs, unchanged default device credentials, and bad design that leaves some network ports exposed to the public internet.

“Perhaps the most obvious lack of logic in the FCC’s policy is its exclusive focus on new equipment authorizations while leaving legacy devices in place,” Mueller wrote. He offered that idea because the Typhoon gangs targeted end-of-life routers and machines that use insecure legacy protocols.

“By banning the sale of the newest, most secure Wi-Fi 7 and Wi-Fi 8 routers from dominant foreign manufacturers, the FCC forces the American public to pay substantially more for upgraded, more secure equipment or, what is more likely, to keep their older, more vulnerable devices for longer,” he argued.

“If a consumer cannot easily or affordably replace their 2019-era router because the 2026 models are banned, the total attack surface of the United States actually increases. The ban targets the very devices most likely to have modern, auto-updating security features, while providing a ‘free pass’ to the millions of insecure, aging devices that state-sponsored actors are currently exploiting.”

Mueller concludes that by using only the criteria of “foreignness,” the ban “actually worsens the security situation.”

“Incentives to upgrade to modern, more secure hardware are reduced, and users are encouraged to keep using unpatched legacy equipment—the exact hardware that state-sponsored actors have successfully weaponized for years.”

He then ponders if the policy makes any sense.

“It does if you see the FCC’s ban as an exercise in industrial policy disguised as cybersecurity,” Mueller argues, then points out that US company Netgear has funded lobbying efforts on issues including the Removing Our Unsecure Technologies to Ensure Reliability and Security Act - aka The “ROUTERS Act.”

“While the risks of state-sponsored infrastructure attacks are real, the remedy chosen – a geographic ban on new hardware – prioritizes geopolitical decoupling over the immediate technical hardening of the American digital home,” Mueller concludes. “Once again – as with the semiconductor export controls and the TikTok ban – we see the bootleggers seeking protection from competition hiding behind the religious banner of national security.”


Original Submission

posted by janrinok on Wednesday April 01, @08:32AM   Printer-friendly

Can it Resolve DOOM? Game Engine in 2,000 DNS Records:

If you've ever poked at one of my CTF challenges, you've probably noticed a pattern - I love hiding payloads in TXT DNS records. I stash the malicious code in a TXT record, have the implant query for it at runtime, and now suddenly the payload is being delivered by the same infrastructure that resolves grandmas-cookie-recipes.com. It's trivially easy to set up and surprisingly annoying to catch forensically, because who's flagging the historic contents of TXT records?

I've always suspected the technique could go further than staging shellcode. TXT records are just arbitrary text fields with no validation. If you can store a payload, you can store a file. If you can store a file, you can store a program. And if you can store a program... well, it can probably run DOOM.

[...] The universal benchmark for "can this thing do something it was never designed to do?" is, always has been, and always will be DOOM. Thermostats run DOOM, pregnancy tests run DOOM, and I want DNS to run DOOM.

The idea is to fetch the entire game engine and its assets from DNS TXT records, load everything into memory, and run it. No downloads, no installers, and no files written to disk. My goal is to load the game into memory entirely through public DNS queries.
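The chunk-and-reassemble scheme described above can be sketched in a few lines. This is an illustrative sketch, not the project's actual code: the record-naming convention (`chunk0.<name>`, `count.<name>`) and the in-memory dict standing in for a DNS zone are assumptions; in a real deployment the dict lookups would be replaced with live TXT queries (e.g. dnspython's `dns.resolver.resolve(name, "TXT")`).

```python
import base64

# Individual TXT strings are capped at 255 bytes, so a large payload
# has to be split across many records and reassembled by the client.
CHUNK = 255

def publish(payload: bytes, basename: str) -> dict:
    """Split a payload into base64 chunks, one per hypothetical TXT record.
    Record names (chunk0.<basename>, chunk1.<basename>, ...) are an
    illustrative convention, not the scheme the article's project uses."""
    encoded = base64.b64encode(payload).decode("ascii")
    pieces = [encoded[i:i + CHUNK] for i in range(0, len(encoded), CHUNK)]
    zone = {f"chunk{n}.{basename}": piece for n, piece in enumerate(pieces)}
    zone[f"count.{basename}"] = str(len(pieces))  # how many chunks to fetch
    return zone

def fetch(zone: dict, basename: str) -> bytes:
    """Reassemble the payload entirely in memory; nothing touches disk.
    With real DNS, each dict lookup becomes one TXT query."""
    count = int(zone[f"count.{basename}"])
    encoded = "".join(zone[f"chunk{n}.{basename}"] for n in range(count))
    return base64.b64decode(encoded)
```

Base64 keeps the chunks safely within the printable-ASCII subset that TXT records reliably carry, at the cost of roughly 33% overhead.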

While researching this, I knew I needed to use a DOOM port written in a language that could be reflected into memory in Windows. I knew C# is used frequently by threat actors for this, but I don't know C# and wasn't about to rewrite the DOOM source myself, so that's where I started looking.

I found managed-doom, a pure C# port of the original DOOM engine. Managed .NET assemblies can be loaded from raw bytes in memory, so no files need to exist on the filesystem. In theory, this meant I could fetch the game's compiled code from DNS and execute it without ever touching the disk.
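The in-memory loading step has a rough analog in other runtimes. Managed-doom itself is C# and relies on .NET's `Assembly.Load(byte[])`; the hypothetical Python helper below only illustrates the same idea of turning raw bytes into executable code without writing a file:

```python
import types

def load_from_bytes(name: str, source: bytes):
    """Build a module object from raw source bytes entirely in memory,
    a rough analog of .NET's Assembly.Load(byte[]). Real loaders should
    verify what they execute; this sketch deliberately does not."""
    module = types.ModuleType(name)
    code = compile(source, f"<memory:{name}>", "exec")
    exec(code, module.__dict__)
    return module

# Example: pretend these bytes were just reassembled from TXT records.
engine = load_from_bytes("engine", b"def tick():\n    return 'frame rendered'")
```

The point, as in the article, is that no artifact ever exists on the filesystem for forensics to find.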

[...] And it works. DOOM is stored, launched, and running from DNS records.

[...] DNS is almost 45 years old and it was designed to map hostnames to IP addresses. It is not a file storage system. It was not designed to be a file storage system. Nobody at the IETF was thinking about it being used as a file storage system when they wrote RFC 1035.

Yet here we are. The most boring protocol on the internet is also, quietly, one of the most abusable.

[...] The full source for this project is available on GitHub.


Original Submission

posted by janrinok on Wednesday April 01, @03:44AM   Printer-friendly
from the I-can-see-you-through-the-keyhole dept.

Surveillance is becoming a defining feature of modern cities, but the level of monitoring varies significantly from one urban center to the next:

In Los Angeles, the number of cameras exceeds 46,000. Hyderabad, India has around 900,000. This visualization ranks major global cities by the number of CCTV cameras per 1,000 people using data from Comparitech, showing where surveillance is most concentrated.

[...] At the top of the list, Hyderabad, India leads globally with 79 cameras per 1,000 people, followed by Indore (72) and Bangalore (41). Collectively, they hold over 1.7 million cameras.

It's worth noting that data for specific cities in China is unavailable owing to government secrecy. However, the country is estimated to have 494 cameras per 1,000 people, or nearly one camera for every two people.
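The rankings are simple ratios of camera counts to population. A quick sketch using the article's Hyderabad figures (the ~11.4 million population is an assumed value for illustration, not stated in the article):

```python
def cameras_per_thousand(cameras: int, population: int) -> float:
    """Surveillance density: CCTV cameras per 1,000 residents."""
    return cameras / population * 1000

# Hyderabad: ~900,000 cameras; the population figure is an assumption
# chosen for illustration, which yields the article's ~79 per 1,000.
rate = cameras_per_thousand(900_000, 11_400_000)
```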

[...] Pakistan's Lahore ranks fourth globally at 28 cameras per 1,000 people. The city has 410,300 cameras in total, and its facial recognition systems are often linked to national databases in real time.

Moscow, Russia ranks sixth globally, with 20 cameras per 1,000 people. Home to one of the most pervasive surveillance systems worldwide, Moscow is blanketed in 250,000 cameras, which use facial recognition to identify protestors, journalists, and dissidents.

Across the West, London is the most highly surveilled city, ranking 11th overall. Next in line is Los Angeles, where the number of cameras has increased by roughly 34% since 2022.

TFA includes a table with the top 29 ranked cities.


Original Submission

posted by janrinok on Tuesday March 31, @11:01PM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/27/china_ai_regulation/

China appears to be unhappy about its brightest AI talent going offshore, either to visit or to sell their wares.

One sign of Beijing's ire appeared this week in a statement [in Chinese. -Ed] from the China Computer Federation (CCF), an organization that promotes development of computer science academics in the Middle Kingdom.

The CCF's beef is with the Neural Information Processing Systems foundation, organizer of the Annual Conference on Neural Information Processing Systems, the fortieth edition of which will take place in Sydney, Australia, later this year.

On the NeurIPS conference site, the organization notes that as it operates in the US legal jurisdiction, it must observe laws that prevent it from providing services to entities the US Treasury Department designates as “Specially Designated Nationals and Blocked Persons” (SDNs). NeurIPS therefore believes it can't accept submissions from any SDN or affiliates.

The CCF’s statement accuses NeurIPS of violating the values of openness, inclusiveness, equality, and cooperation that it says are core values of academic exchange, and calls on the org to “immediately correct its erroneous practices, and restore equal rights for submissions and academic exchange to all institutions.”

The federation called on all Chinese computer scientists to boycott NeurIPS and refuse to submit papers.

The CCF clearly doesn't want Chinese researchers presenting at NeurIPS under these conditions, which would likely mean some significant Chinese boffins don't make it to Sydney.

Other major hosts of academic CompSci conferences, like the Association for Computing Machinery, are also US-based and may therefore also have to ensure that no SDN-linked entities attend their events.

We've asked NeurIPS for comment and will update this story if we receive a substantive response.

The spat between CCF and NeurIPS comes in the same week that Beijing has reportedly prevented the founders of agentic AI startup Manus from leaving China. Such a ban would complicate social networking giant Meta’s planned acquisition of the company.

Manus established an entity in Singapore to get better access to funding and customers. The move also made it an easier acquisition target.

Beijing reacted angrily when Meta bought Manus, on grounds that it doesn't want domestic AI companies going offshore.

And now Chinese computer scientists have been given a reason to stay at home, too.

Meanwhile, Chinese tech giants like Alibaba are building their own AI stacks comprising home-grown chips, models, and networks.

China's central planners have made widespread AI adoption a major goal. And perhaps one they intend to pursue alone.


Original Submission

posted by janrinok on Tuesday March 31, @06:13PM   Printer-friendly

Centre for Long-Term Resilience finds a 5x increase in scheming-related AI incidents

It has long been theorised that AI systems may pursue harmful goals in ways that evade oversight or control. In the worst case, this type of behaviour – sometimes known as 'scheming' – could lead to catastrophes.

While today AI agents are engaging in lower stakes use cases, in the future AI agents could end up scheming in extremely high-stakes domains, like military or critical national infrastructure contexts, if the capability and propensity to scheme emerges and is not addressed.

Our understanding of this risk has so far been limited to observations in experiments. While these experiments have raised important alarms, they have also faced legitimate criticism: the experimental set-ups are sometimes contrived, and their relevance to real-world deployments is uncertain.

As AI capabilities continue to grow, so will the need for better visibility over whether and how scheming is materialising in the real world. This is crucial for scientific understanding, effective policy development, and emergency response. This is why we created the Loss of Control Observatory – the first capability of its kind to systematically detect and monitor 'AI scheming' behaviours across all AI models in deployment.

Today, we are launching a major report that publishes findings from the first five months of the Observatory.

[...] The trend is striking. The number of credible scheming-related incidents increased 4.9x over the collection period, a statistically significant increase that far outpaced the 1.7x growth in overall online discussion of scheming, and the 1.3x growth in general negative discussion about AI. This surge coincided with the release of a wave of more capable, more agentic AI models and frameworks from major developers.

While we did not detect catastrophic scheming incidents, the behaviours we observed nonetheless demonstrate concerning precursors to more serious scheming, such as a willingness to disregard direct instructions, circumvent safeguards, lie to users and single-mindedly pursue a goal in harmful ways.

[Source]: The Centre for Long-Term Resilience


Original Submission

posted by hubie on Tuesday March 31, @11:25AM   Printer-friendly

A judge has granted Anthropic's request for preliminary injunction in its court battle against the US government:

The court has granted Anthropic’s request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a “supply chain risk,” at least for now. If you’ll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract that would allow the government to use its technology for mass surveillance and the development of autonomous weapons.

In response to Anthropic’s refusal, the president ordered federal agencies to stop using Claude and the company’s other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversary nations like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of free speech and its rights to due process. It also asked the court to put a pause on the ban while the lawsuit is ongoing.

In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would “introduce unacceptable risk” to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took “appear designed to punish Anthropic.”

Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious. She added that the government argued that Anthropic showed its subversive tendencies by “questioning” the use of its technology. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” she wrote.

Anthropic told The New York Times that it’s “grateful to the court for moving swiftly” and that it’s now focused on “working productively with the government to ensure all Americans benefit from safe, reliable AI.” The company’s lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic “has shown a likelihood of success on its First Amendment claim.”


Original Submission

posted by hubie on Tuesday March 31, @06:42AM   Printer-friendly
from the taurine-or-ethanolamine-that-is-the-question dept.

The polish registers as touch by disrupting the screen's electric field:

A newly formulated nail polish could one day let people activate touchscreens with their fingernails.

When pressed to a screen, the polish disrupts the screen’s electric field, which the device registers as touch. While the formula isn’t commercially viable yet, it could allow people with long nails to use them like styluses.

“This is huge, because it shows that functional behavior can be embedded invisibly into everyday cosmetic materials,” says Shuyi Sun, a computer scientist who has studied cosmetic biosensors and now works at the Association of California Nurse Leaders in Sacramento.

Touchscreens, such as on smartphones and tablets, are typically made of glass coated with a thin, transparent layer of electrically conductive material. That layer creates a small electric field across the screen. When another conductive object, such as a fingertip, contacts the screen, it disturbs the electric field. The device registers that disturbance as a touch and can detect the point on the screen where it occurred.
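The detection logic described above can be sketched as a simple threshold check on per-cell capacitance readings. This is a hypothetical illustration only; the baseline, threshold, and grid-scan structure here are assumptions, and real touch controllers are far more sophisticated (filtering, multi-touch tracking, baseline drift compensation):

```python
# Hypothetical sketch: a controller scans a grid of sense cells and reports
# any cell whose capacitance reading deviates from its idle baseline by more
# than a threshold as a touch at that grid location.

BASELINE = 100.0   # idle capacitance reading per cell (arbitrary units; assumed)
THRESHOLD = 5.0    # minimum deviation that counts as a touch (assumed)

def detect_touches(readings):
    """Return (row, col) cells whose reading deviates from baseline.

    `readings` is a 2-D list of capacitance samples. A conductive object
    (a fingertip, or a polish that perturbs the field) shifts the local
    reading; a nonconductive bare nail leaves it near baseline.
    """
    touches = []
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            if abs(value - BASELINE) >= THRESHOLD:
                touches.append((r, c))
    return touches

# A bare nail barely perturbs the field; a fingertip (or a sufficiently
# conductive coating) shifts the reading well past the threshold.
grid = [
    [100.1, 100.0, 99.8],   # no touch on this row
    [100.0, 92.0, 100.2],   # fingertip over the middle cell
]
print(detect_touches(grid))  # -> [(1, 1)]
```

This is why the polish additives matter: without them, a nail's contact never moves a cell's reading past the threshold, so the controller reports nothing.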

But nonconductive materials — like a fingernail or the fabric of a glove — don’t distort the field, so they don’t register on the screen. People with long nails must use the pads of their fingers to type because they can’t use their nails.

“It’s really hard to use your phone,” says Manasi Desai, an undergraduate student studying chemistry and biology at Centenary College of Louisiana in Shreveport. Changing which part of the finger people type with can cause typing errors, at least until users adjust to the new angle.

To remedy this common inconvenience, Desai and her research adviser, organometallic chemist Joshua Lawrence, mixed several different additives into commercially available clear nail polish. Two of those additives, ethanolamine and taurine, each resulted in a clear polish formulation that, in a blob held with tweezers, could activate the touchscreen. While ethanolamine has some toxicity, taurine is a common dietary supplement that occurs naturally in the human body.

“One of our major goals was to make it clear and colorless, so that you could apply it over any manicure or even on your bare nails,” Desai says. Desai shared the findings on March 23 at the American Chemical Society’s spring meeting in Atlanta.

The modified nail polish uses acid-base chemistry to activate the touchscreen, the team suspects, though more research is needed to confirm. When in contact with the screen’s electric field, the added molecules probably shuffle protons between themselves, moving just enough charge to affect the field and register as touch.

The lacquer isn’t ready to hit the shelves just yet, Lawrence says. Right now, painting the polish on a fingernail doesn’t leave enough additive behind to activate the screen. In future work, the duo plans to focus on improving the formula’s performance in thin coats on fingernails, possibly by getting more taurine into the polish.

Journal Reference: M. Desai and J. Lawrence. Modification of nail polish formulations for conductivity to operate capacitive touchscreens. ACS Spring 2026, Atlanta, GA. Presented March 23, 2026.


Original Submission

posted by hubie on Tuesday March 31, @01:56AM   Printer-friendly
from the Betteridge-won't-say-for-fear-of-being-surveilled dept.

Are US-Based VPN Users at Risk of Being Treated as Foreign Surveillance Targets?

Even lawmakers want to know:

You might already use a VPN to keep your online activity private, and you may naturally assume you’re adding an extra layer of protection by doing so. But lawmakers are now raising the possibility that, in some cases, it could actually affect your rights against government spying on your data.

As Wired reports, in a letter sent to the Director of National Intelligence, Tulsi Gabbard, several Democratic lawmakers are asking whether Americans who use VPNs could be treated as foreigners under US law. If so, that could mean losing certain protections against warrantless surveillance.

The issue comes down to how VPNs handle your connection. By routing traffic through servers that are often located overseas, your activity can appear to originate from another country. For some users, that’s the point. But under US intelligence rules, communications from an unknown location may be treated as foreign, which carries fewer safeguards. Section 702 of the Foreign Intelligence Surveillance Act allows agencies to collect large amounts of data targeting people outside the US. In that context, an American using a VPN server abroad could, at least in theory, look no different from a foreign user.
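The presumption at issue can be reduced to a short decision rule. The sketch below is purely illustrative; the lookup table and function names are hypothetical stand-ins (real systems rely on geolocation databases and classified targeting procedures, not a dictionary):

```python
# Hypothetical sketch of the default presumption the lawmakers describe:
# if a communication's origin cannot be established, it is treated as
# foreign by default, and a VPN exit node abroad looks foreign outright.

KNOWN_LOCATIONS = {
    "203.0.113.7": "US",    # direct residential connection (example address)
    "198.51.100.9": "CH",   # VPN exit node located in Switzerland
    # an exit node that commingles many users' traffic may yield no
    # reliable per-user location at all
}

def presumed_foreign(ip):
    """Apply the default rule: unknown or non-US origin -> presumed foreign."""
    country = KNOWN_LOCATIONS.get(ip)  # None if origin can't be established
    if country is None:
        return True   # unknown location is presumed non-US
    return country != "US"

print(presumed_foreign("203.0.113.7"))   # -> False (appears domestic)
print(presumed_foreign("198.51.100.9"))  # -> True  (exit node abroad)
print(presumed_foreign("192.0.2.55"))    # -> True  (origin unknown)
```

Under this rule, an American whose traffic exits a VPN server overseas, or whose origin simply can't be determined, falls on the "foreign" side of the line, which is exactly the scenario the letter asks Gabbard to clarify.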

The lawmakers are not alleging that this is already happening, but argue that the lack of transparency is the concern. Americans spend billions each year on VPN services, many of which route traffic internationally, yet there is little public guidance on how that might affect their rights. It would also create a bit of a contradiction, since agencies like the FBI and NSA have previously encouraged VPN use as a way to improve privacy. The question now is whether that advice could come with some serious trade-offs.

We’ll have to wait to find out if the government responds to the letter. Until then, if you live in the land of the free, it’s worth being mindful of how routing your traffic through an overseas VPN server could affect how your data is treated.

Using a VPN May Subject You to NSA Spying

US lawmakers are pressing Tulsi Gabbard to reveal whether using a VPN can strip Americans of their constitutional protections against warrantless surveillance:

Six Democratic lawmakers are pressing the nation's top intelligence official to publicly disclose whether Americans who use commercial VPN services risk being treated as foreigners under United States surveillance law—a classification that would strip them of constitutional protections against warrantless government spying.

In a letter sent Thursday to Director of National Intelligence Tulsi Gabbard, the lawmakers say that because VPNs obscure a user's true location, and because intelligence agencies presume that communications of unknown origin are foreign, Americans may be inadvertently waiving the privacy protections they're entitled to under the law.

Several federal agencies, including the FBI, the National Security Agency, and the Federal Trade Commission, have recommended that consumers use VPNs to protect their privacy. But following that advice may inadvertently cost Americans the very protections they're seeking.

The letter was signed by members of the Democratic Party's progressive flank: Senators Ron Wyden, Elizabeth Warren, Edward Markey, and Alex Padilla, along with Representatives Pramila Jayapal and Sara Jacobs.

The concern centers on how intelligence agencies treat internet traffic routed through commercial VPN servers, which may be located anywhere in the world. Millions of Americans use these services routinely, whether to access region-restricted content like overseas sports broadcasts or to protect their privacy on public Wi-Fi networks. Because VPN servers commingle traffic from users in many countries, a single server—even one located in the United States—may carry communications from foreigners, potentially making it a target for surveillance under authorities that allow the government to secretly compel service from US service providers.

Under a controversial warrantless surveillance program, the US government intercepts vast quantities of electronic communications belonging to people overseas. The program also sweeps in enormous volumes of private messages belonging to Americans, which the FBI may search without a warrant, even though it is authorized to target only foreigners abroad.

The program, authorized under Section 702 of the Foreign Intelligence Surveillance Act, is set to expire next month and has become the subject of a fierce battle in Congress over whether it should be renewed without significant reforms to protect Americans' privacy.

Thursday's letter points to declassified intelligence community guidelines that establish a default presumption at the heart of the lawmakers' concern: Under the NSA's targeting procedures, a person whose location is unknown is presumed to be a non-US person unless there is specific information to the contrary. Department of Defense procedures governing signals intelligence activities contain the same presumption.

[...] Americans spend billions of dollars each year on commercial VPN services, many offered by foreign-headquartered companies that route traffic through servers located overseas. The letter notes that these services are widely advertised as privacy tools, including by elements of the US government itself.

Despite the scale of the market, the letter suggests consumers have been given no meaningful guidance on how to protect themselves.

The lawmakers urge Gabbard to "clarify what, if anything, American consumers can do to ensure they receive the privacy protections they are entitled to under the law and the US Constitution."


Original Submission #1 | Original Submission #2

posted by hubie on Monday March 30, @09:12PM   Printer-friendly
from the what-me-worry? dept.

AI accountability is worryingly low:

Despite rapid AI adoption, new research from ISACA suggests many businesses might be going in blindly – more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.

Only around one in five (21%) say they feel confident stopping an AI system within 30 minutes, highlighting major safety gaps.

And it's not just shutting them down that's a problem – not even half (42%) say they could explain an AI failure to leadership or regulators.

ISACA explained that the gaps aren't just concerning for business operations and reputation, but also from a legislative standpoint: the EU AI Act requires explainability and accountability.

Part of the failure comes down to unclear accountability, with 20% of workers unsure of who is responsible for AI failures. Poor visibility is also a contributing factor, with one in three organizations not requiring AI's use at work to be disclosed, which ISACA says creates a nightmare of blind spots.

The report explains that businesses are currently treating it as a technical problem, but they should instead approach it as an organization-wide governance challenge. "Truly closing the gap can’t be done by process changes alone," Chief Global Strategy Officer Chris Dimitriadis wrote. "Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle."

Looking ahead, businesses are being urged to define accountability at the senior level and to start rolling out better visibility and auditing. Besides this, they must also build AI incident response into their strategies and factor it into their broader cybersecurity postures.

With only 38% of respondents identifying the board or an exec as being accountable in the event of an AI incident, it's clear more needs to be done to disseminate information and processes through the workforce.


Original Submission